Non-monotone trust region methods for nonlinear equality constrained optimization without a penalty function
Authors
Abstract
We propose and analyze a class of penalty-function-free nonmonotone trust-region methods for nonlinear equality constrained optimization problems. The algorithmic framework yields global convergence without using a merit function and allows nonmonotonicity independently for both the constraint violation and the value of the Lagrangian function. As in the Byrd–Omojokun class of algorithms, each step is composed of a quasinormal step and a tangential step, and both steps are required to satisfy a decrease condition for their respective trust-region subproblems. The proposed mechanism for accepting steps combines nonmonotone decrease conditions on the constraint violation and/or the Lagrangian function, which leads to flexibility and an acceptance behavior comparable to that of filter-based methods. We establish global convergence of the method and, furthermore, prove transition to quadratic local convergence. Numerical tests are presented that confirm the robustness and efficiency of the approach.
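To make the mechanism concrete, the following Python sketch shows one possible composite-step trust-region iteration with a nonmonotone acceptance test. It is a minimal illustration, not the algorithm analyzed in the paper: the subproblems are solved only to Cauchy-point or projected-gradient accuracy, the multipliers come from a plain least-squares estimate, and the nonmonotone reference values are simple maxima over the last M accepted iterates; the function name and the parameters zeta, M, and gamma are hypothetical choices made for this sketch.

```python
import numpy as np
from scipy.linalg import null_space

def nonmonotone_composite_tr(f, grad_f, c, jac_c, x0, max_iter=200,
                             delta=1.0, zeta=0.8, M=5, gamma=1e-4, tol=1e-8):
    """Illustrative composite-step trust-region loop with a nonmonotone
    acceptance test on the constraint violation and the Lagrangian."""
    x = np.asarray(x0, dtype=float)
    hist_h, hist_L = [], []                      # nonmonotone reference histories
    for _ in range(max_iter):
        g, ck, A = grad_f(x), c(x), jac_c(x)
        lam = np.linalg.lstsq(A.T, -g, rcond=None)[0]   # least-squares multipliers
        gL = g + A.T @ lam                              # gradient of the Lagrangian
        h = np.linalg.norm(ck)
        if h < tol and np.linalg.norm(gL) < tol:
            break                                       # approximate KKT point
        # Quasinormal step: Cauchy step for min ||c_k + A_k n||, ||n|| <= zeta*delta
        gn = A.T @ ck
        ngn = np.linalg.norm(gn)
        if ngn > 0:
            alpha = min(ngn**2 / (np.linalg.norm(A @ gn)**2 + 1e-16),
                        zeta * delta / ngn)
            n = -alpha * gn
        else:
            n = np.zeros_like(x)
        # Tangential step: projected steepest-descent step on the Lagrangian,
        # restricted to the null space of A_k and the remaining trust region
        Z = null_space(A)
        gt = Z @ (Z.T @ gL)
        rem = np.sqrt(max(delta**2 - n @ n, 0.0))
        t = -min(1.0, rem / (np.linalg.norm(gt) + 1e-16)) * gt
        # Trial point, actual values, and model-predicted reductions
        x_trial = x + n + t
        h_trial = np.linalg.norm(c(x_trial))
        L_cur = f(x) + lam @ ck
        L_trial = f(x_trial) + lam @ c(x_trial)
        pred_h = h - np.linalg.norm(ck + A @ (n + t))   # model decrease in violation
        pred_L = -(gL @ t)                              # model decrease in Lagrangian
        # Nonmonotone references: worst value over the last M accepted iterates
        h_ref = max(hist_h[-M:] + [h])
        L_ref = max(hist_L[-M:] + [L_cur])
        # Accept if EITHER measure decreases sufficiently relative to its reference
        if (pred_h > 0 and h_trial <= h_ref - gamma * pred_h) or \
           (pred_L > 0 and L_trial <= L_ref - gamma * pred_L):
            x, delta = x_trial, min(2.0 * delta, 1e3)   # accept, expand radius
            hist_h.append(h_trial); hist_L.append(L_trial)
        else:
            delta *= 0.5                                # reject, shrink radius
    return x

# Example use on min x1^2 + x2^2 subject to x1 + x2 = 1:
# sol = nonmonotone_composite_tr(lambda x: x @ x, lambda x: 2 * x,
#                                lambda x: np.array([x[0] + x[1] - 1.0]),
#                                lambda x: np.array([[1.0, 1.0]]),
#                                x0=np.array([2.0, -1.0]))
```

Because the quasinormal step lies in the range of A_k^T and the tangential step in the null space of A_k, the two steps are orthogonal, so the combined step automatically stays inside the trust region; the either/or acceptance test against the worst recent values is what gives the filter-like flexibility described in the abstract.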
Similar references
On Efficiency of Non-Monotone Adaptive Trust Region and Scaled Trust Region Methods in Solving Nonlinear Systems of Equations
In this paper we apply two important methods to some well-known problems and compare their performance and efficiency in solving nonlinear systems of equations. One of these methods is a non-monotone adaptive trust-region strategy and the other is a scaled trust-region approach. Each of the methods showed fast convergence in special problems and slow convergence in other o...
Combining filter and non-monotone trust region algorithm for nonlinear systems of equalities and inequalities
In this paper, we combine a filter technique and a non-monotone trust-region algorithm for nonlinear systems of equalities and inequalities. The systems of equalities and inequalities are transformed into a continuous equality constrained optimization problem that is solved by the new algorithm. The filter method guarantees global convergence of the algorithm under appropriate assumptions. The second order correction step is us...
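The snippet does not spell out how the reformulation is carried out. Purely for illustration, and as an assumption rather than a detail taken from that paper, one standard way to turn an inequality into an equality is to introduce a squared slack variable:

$$ g_i(x) \le 0 \quad\Longleftrightarrow\quad g_i(x) + s_i^2 = 0 \ \text{ for some } s_i \in \mathbb{R}, $$

so that applying this to every inequality, together with the original equalities, yields a purely equality-constrained problem in the enlarged variables (x, s).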
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
Convergence to a Second-order Point of a Trust-region Algorithm with a Nonmonotonic Penalty Parameter for Constrained Optimization
In a recent paper, the author (Ref. 1) proposed a trust-region algorithm for solving the problem of minimizing a nonlinear function subject to a set of equality constraints. The main feature of the algorithm is that the penalty parameter in the merit function can be decreased whenever it is warranted. He studied the behavior of the penalty parameter and proved several global and loc...
A New Trust-Region Algorithm for Equality Constrained Optimization
We present a new trust-region algorithm for solving nonlinear equality constrained optimization problems. At each iterate a change of variables is performed to improve the ability of the algorithm to follow the constraint level sets. The algorithm employs L2 penalty functions to obtain global convergence. Under certain assumptions we prove that this algorithm globally converges to a point ...
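For reference, the L2 penalty (merit) function commonly used for equality constrained problems, given here as a standard definition rather than a formula quoted from that paper, is

$$ \phi_\sigma(x) = f(x) + \sigma \, \| c(x) \|_2 , $$

where f is the objective, c collects the equality constraints, and \sigma > 0 is the penalty parameter; global convergence arguments typically require \sigma to be chosen large enough to dominate the Lagrange multiplier norm.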
Journal: Math. Program.
Volume: 95, Issue: -
Pages: -
Publication date: 2003